The use of Recurrent Neural Networks for video captioning has recently gained a lot of attention, since they can be used both to encode the input video and to generate the corresponding description. In this paper, we present a recurrent video encoding scheme which can discover and leverage the hierarchical structure of the video. Unlike the classical encoder-decoder approach, in which a video is encoded continuously by a recurrent layer, we propose a novel LSTM cell which can identify discontinuity points between frames or segments and modify the temporal connections of the encoding layer accordingly. We evaluate our approach on three large-scale datasets: the Montreal Video Annotation dataset, the MPII Movie Description dataset and the Microsoft Video Description Corpus. Experiments show that our approach can discover appropriate hierarchical representations of input videos and improve the state-of-the-art results on movie description datasets.
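To illustrate the idea of an LSTM cell that severs its temporal connections at detected discontinuities, here is a minimal NumPy sketch. This is a hypothetical illustration, not the paper's exact formulation: the boundary detector is shown as a simple linear probe with a hard 0.5 threshold, whereas a trained model would need a differentiable (e.g. straight-through) variant, and all weight initializations here are placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BoundaryAwareLSTMCell:
    """Hypothetical boundary-aware LSTM cell.

    At each step, a learned detector decides whether the current frame
    starts a new segment; if so, the recurrent state is reset to zero
    before the standard LSTM update, cutting the temporal connection.
    """

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(hidden_size)
        # Stacked weights for the four LSTM gates (input, forget, output, candidate),
        # applied to the concatenation [x, h_prev].
        self.W = rng.uniform(-scale, scale, (4 * hidden_size, input_size + hidden_size))
        self.b = np.zeros(4 * hidden_size)
        # Boundary detector: a scalar linear probe on [x, h_prev] (an assumption).
        self.w_s = rng.uniform(-scale, scale, input_size + hidden_size)
        self.b_s = 0.0
        self.hidden_size = hidden_size

    def step(self, x, h_prev, c_prev):
        # Boundary decision from the current input and the previous hidden state.
        s = sigmoid(self.w_s @ np.concatenate([x, h_prev]) + self.b_s)
        boundary = bool(s > 0.5)
        if boundary:
            # Hard reset: sever the temporal connection at the segment boundary.
            h_prev = np.zeros_like(h_prev)
            c_prev = np.zeros_like(c_prev)
        # Standard LSTM update on the (possibly reset) state.
        z = self.W @ np.concatenate([x, h_prev]) + self.b
        H = self.hidden_size
        i, f = sigmoid(z[:H]), sigmoid(z[H:2 * H])
        o, g = sigmoid(z[2 * H:3 * H]), np.tanh(z[3 * H:])
        c = f * c_prev + i * g
        h = o * np.tanh(c)
        return h, c, boundary
```

Running `step` over a sequence of frame features yields both the encoded states and a sequence of boundary decisions, which is what allows the encoding layer to expose the video's segment structure to the decoder.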